Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [1]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change show_n_images to view a different number of examples.

In [2]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[2]:
<matplotlib.image.AxesImage at 0x10495b128>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change show_n_images to view a different number of examples.

In [3]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Out[3]:
<matplotlib.image.AxesImage at 0x1053f4160>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. The MNIST and CelebA images are 28x28 with pixel values in the range of -0.5 to 0.5. The CelebA images are cropped to remove parts of the image that don't include a face, then resized down to 28x28.

The MNIST images are black and white with a single color channel, while the CelebA images have 3 color channels (RGB).
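
For reference, the sketch below shows roughly what that preprocessing might look like for a single CelebA image. The function name, the crop box, and the use of Pillow are assumptions for illustration only; the actual work is done inside helper.py.

import numpy as np
from PIL import Image

def preprocess_celeba_image(path, width=28, height=28):
    # Sketch only: center-crop an aligned CelebA face, resize to 28x28,
    # and scale pixel values to [-0.5, 0.5]. The crop box is an assumption,
    # not necessarily the one helper.py uses.
    image = Image.open(path)
    image = image.crop((35, 55, 143, 163))  # keep the central face region of the 178x218 aligned image
    image = image.resize((width, height), Image.BILINEAR)
    return np.array(image, dtype=np.float32) / 255.0 - 0.5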

Build the Neural Network

You'll build the components necessary to train a GAN by implementing the following functions:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.1.0
/Users/Rossonero/anaconda/envs/py36/lib/python3.6/site-packages/ipykernel/__main__.py:14: UserWarning: No GPU found. Please use a GPU to train your neural network.

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [5]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # TODO: Implement Function

    inputs = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')

    return inputs, inputs_z, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
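
Once the function below is implemented, the reuse pattern looks like this sketch: the first call creates the variables, and a second call with reuse=True scores another batch (for example, the generator's output) with the same weights. The placeholder tensors here are hypothetical and only illustrate the calling pattern.

# Hypothetical input tensors, used only to show the variable-reuse pattern.
real_images = tf.placeholder(tf.float32, (None, 28, 28, 3))
fake_images = tf.placeholder(tf.float32, (None, 28, 28, 3))

d_out_real, d_logits_real = discriminator(real_images)              # first call creates the variables
d_out_fake, d_logits_fake = discriminator(fake_images, reuse=True)  # second call reuses them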

In [6]:
def leaky_relu(x, alpha=0.05, name='leaky_relu'):
    # Leaky ReLU: pass positive values through, scale negatives by alpha.
    return tf.maximum(x, alpha * x, name=name)
In [20]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # TODO: Implement Function
    alpha = 0.2
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 28 x 28 x (number of image channels)
        x1 = tf.layers.conv2d(images, 64, 5, strides=2, kernel_initializer=tf.contrib.layers.xavier_initializer(), padding='same')
        x1 = leaky_relu(x1, alpha)
#         x1 = tf.layers.dropout(x1, rate=0.2, training=False)
        #14x14x64
        
        x2 = tf.layers.conv2d(x1, 128, 5, strides=2, kernel_initializer=tf.contrib.layers.xavier_initializer(), padding='same')
        x2 = tf.layers.batch_normalization(x2, training=True)
        x2 = leaky_relu(x2, alpha)
#         x2 = tf.layers.dropout(x2, rate=0.2, training=False)
        
        #7x7x128
        
        # (An optional third conv layer, e.g. 256 filters with stride 2 down to 4x4x256,
        # could be added here, but two conv layers are enough for 28x28 inputs.)
        
        # Flatten
        flat = tf.reshape(x2, (-1, 7*7*128))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)

        return out, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [21]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO: Implement Function
    alpha=0.2
    with tf.variable_scope('generator', reuse=not is_train):
        # First fully connected layer
        x1 = tf.layers.dense(z, 7*7*512)
        #reshape to start the conv stack
        x1 = tf.reshape(x1, (-1, 7, 7, 512))
        x1 = tf.layers.batch_normalization(x1, training=is_train)
        x1 = leaky_relu(x1, alpha)
        #7x7x512 now
        
        x2 = tf.layers.conv2d_transpose(x1, 256, 5, kernel_initializer=tf.contrib.layers.xavier_initializer(), strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=is_train)
        x2 = leaky_relu(x2, alpha)
#         x2 = tf.layers.dropout(x2, rate=0.2, training=is_train)
        # 14x14x256 now
        
        x3 = tf.layers.conv2d_transpose(x2, 128, 5, kernel_initializer=tf.contrib.layers.xavier_initializer(), strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=is_train)
        x3 = leaky_relu(x3, alpha)
#         x3 = tf.layers.dropout(x3, rate=0.2, training=is_train)
        # 28x28x128 now
        
        # Output layer
        logits = tf.layers.conv2d_transpose(x3, out_channel_dim, 5, strides=1, padding='same')
        # 28 x 28 x out_channel_dim now
        
        out = tf.tanh(logits)
        
        return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)
In [22]:
smooth = 0.1  # one-sided label smoothing: real labels become 0.9 instead of 1.0
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # TODO: Implement Function
    g_model = generator(input_z, out_channel_dim, is_train=True)
    d_model_real, d_logits_real = discriminator(input_real, reuse=False)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
    
    d_loss_real = tf.reduce_mean(
                    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)* (1 - smooth)))
    d_loss_fake = tf.reduce_mean(
                    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    g_loss = tf.reduce_mean(
                    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
    
    d_loss = d_loss_real + d_loss_fake
    return d_loss, g_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [23]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # TODO: Implement Function
    t_vars = tf.trainable_variables()
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    g_vars = [var for var in t_vars if var.name.startswith('generator')]
    
    # Batch normalization's moving-average updates live in UPDATE_OPS; run them with each optimizer step.
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)

    
    return d_train_opt, g_train_opt

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [24]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.

In [25]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # TODO: Build Model

    print(data_shape)
    input_real, input_z, _ = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
        
    d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
    d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
    
    saver = tf.train.Saver()

    losses = []
    steps = 0
    print_every = 50
    show_every = 50
    print('------start training------')
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                # TODO: Train Model
                steps += 1
                # get_batches yields images in [-0.5, 0.5]; scale to [-1, 1] to match the generator's tanh output.
                batch_images *= 2
                
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
                
                _ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z})
                _ = sess.run(g_opt, feed_dict={input_z: batch_z, input_real: batch_images})
                
                if steps % print_every == 0:
                    # Every print_every steps, compute the current losses and print them out
                    train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
                    train_loss_g = g_loss.eval({input_z: batch_z})

                    print("Epoch {}/{}...".format(steps, epoch_count),
                          "Discriminator Loss: {:.4f}...".format(train_loss_d),
                          "Generator Loss: {:.4f}".format(train_loss_g))
                    # Save losses to view after training
                    losses.append((train_loss_d, train_loss_g))

                if steps % show_every == 0:
                    show_generator_output(sess, 64, input_z, data_shape[3], data_image_mode)
                

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.

In [26]:
#Learning Rate: ✔ Appropriate values are 1e-3/4e-4.
#Beta1: ✔ Appropriate values are 0.3/0.5.
#Alpha/ leak parameter: ✔ Appropriate values are ~0.2
#Z-dim: ✔ Appropriate values are 100.
#Batch Size: ✔ Appropriate batch sizes are 64/32.
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.3

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
tf.reset_default_graph()
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)
(60000, 28, 28, 1)
------start training------
Epoch 50/2... Discriminator Loss: 1.5952... Generator Loss: 0.7682
Epoch 100/2... Discriminator Loss: 2.1078... Generator Loss: 0.3028
Epoch 150/2... Discriminator Loss: 1.9582... Generator Loss: 0.2869
Epoch 200/2... Discriminator Loss: 1.0300... Generator Loss: 1.4582
Epoch 250/2... Discriminator Loss: 1.2882... Generator Loss: 1.2911
Epoch 300/2... Discriminator Loss: 1.9013... Generator Loss: 0.2672
Epoch 350/2... Discriminator Loss: 1.4569... Generator Loss: 1.2383
Epoch 400/2... Discriminator Loss: 1.9440... Generator Loss: 0.2837
Epoch 450/2... Discriminator Loss: 1.3060... Generator Loss: 1.4989
Epoch 500/2... Discriminator Loss: 1.4122... Generator Loss: 0.5648
Epoch 550/2... Discriminator Loss: 1.5380... Generator Loss: 0.6236
Epoch 600/2... Discriminator Loss: 1.5035... Generator Loss: 0.4692
Epoch 650/2... Discriminator Loss: 1.3612... Generator Loss: 0.5982
Epoch 700/2... Discriminator Loss: 1.4920... Generator Loss: 1.3632
Epoch 750/2... Discriminator Loss: 1.2741... Generator Loss: 1.0117
Epoch 800/2... Discriminator Loss: 1.3817... Generator Loss: 0.6569
Epoch 850/2... Discriminator Loss: 1.0673... Generator Loss: 0.9403
Epoch 900/2... Discriminator Loss: 1.1882... Generator Loss: 1.0362
Epoch 950/2... Discriminator Loss: 1.3601... Generator Loss: 0.8829
Epoch 1000/2... Discriminator Loss: 1.3750... Generator Loss: 0.6635
Epoch 1050/2... Discriminator Loss: 1.3102... Generator Loss: 1.0152
Epoch 1100/2... Discriminator Loss: 1.3849... Generator Loss: 0.5504
Epoch 1150/2... Discriminator Loss: 1.6902... Generator Loss: 0.3920
Epoch 1200/2... Discriminator Loss: 1.2045... Generator Loss: 1.1357
Epoch 1250/2... Discriminator Loss: 2.0715... Generator Loss: 2.5905
Epoch 1300/2... Discriminator Loss: 1.1746... Generator Loss: 1.5861
Epoch 1350/2... Discriminator Loss: 1.3684... Generator Loss: 1.5074
Epoch 1400/2... Discriminator Loss: 1.3740... Generator Loss: 1.4803
Epoch 1450/2... Discriminator Loss: 1.2561... Generator Loss: 0.6701
Epoch 1500/2... Discriminator Loss: 0.9883... Generator Loss: 1.2653
Epoch 1550/2... Discriminator Loss: 1.2430... Generator Loss: 0.7707
Epoch 1600/2... Discriminator Loss: 1.4670... Generator Loss: 0.4868
Epoch 1650/2... Discriminator Loss: 1.2130... Generator Loss: 1.0506
Epoch 1700/2... Discriminator Loss: 1.4562... Generator Loss: 0.4710
Epoch 1750/2... Discriminator Loss: 1.2917... Generator Loss: 0.7608
Epoch 1800/2... Discriminator Loss: 1.5453... Generator Loss: 1.0407
Epoch 1850/2... Discriminator Loss: 1.3911... Generator Loss: 0.5343

CelebA

Run your GAN on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.

In [31]:
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.3

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
tf.reset_default_graph()
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
(202599, 28, 28, 3)
------start training------
Epoch 50/1... Discriminator Loss: 3.1060... Generator Loss: 0.2222
Epoch 100/1... Discriminator Loss: 1.2512... Generator Loss: 0.8046
Epoch 150/1... Discriminator Loss: 1.9838... Generator Loss: 2.9386
Epoch 200/1... Discriminator Loss: 1.3614... Generator Loss: 2.0381
Epoch 250/1... Discriminator Loss: 1.7465... Generator Loss: 1.4755
Epoch 300/1... Discriminator Loss: 2.3057... Generator Loss: 3.0667
Epoch 350/1... Discriminator Loss: 1.2820... Generator Loss: 0.7406
Epoch 400/1... Discriminator Loss: 1.7034... Generator Loss: 1.4527
Epoch 450/1... Discriminator Loss: 1.5394... Generator Loss: 0.7875
Epoch 500/1... Discriminator Loss: 1.4513... Generator Loss: 0.4844
Epoch 550/1... Discriminator Loss: 1.4276... Generator Loss: 0.5421
Epoch 600/1... Discriminator Loss: 2.0018... Generator Loss: 2.5805
Epoch 650/1... Discriminator Loss: 1.3975... Generator Loss: 0.6528
Epoch 700/1... Discriminator Loss: 1.8205... Generator Loss: 0.3144
Epoch 750/1... Discriminator Loss: 1.5147... Generator Loss: 0.5783
Epoch 800/1... Discriminator Loss: 1.3613... Generator Loss: 0.9003
Epoch 850/1... Discriminator Loss: 1.5979... Generator Loss: 0.4679
Epoch 900/1... Discriminator Loss: 1.3686... Generator Loss: 0.9580
Epoch 950/1... Discriminator Loss: 1.5890... Generator Loss: 0.3933
Epoch 1000/1... Discriminator Loss: 0.8667... Generator Loss: 1.3988
Epoch 1050/1... Discriminator Loss: 1.1931... Generator Loss: 2.1710
Epoch 1100/1... Discriminator Loss: 1.7952... Generator Loss: 1.2317
Epoch 1150/1... Discriminator Loss: 2.1315... Generator Loss: 1.6710
Epoch 1200/1... Discriminator Loss: 1.4484... Generator Loss: 0.5593
Epoch 1250/1... Discriminator Loss: 1.3803... Generator Loss: 0.7423
Epoch 1300/1... Discriminator Loss: 1.4769... Generator Loss: 0.9884
Epoch 1350/1... Discriminator Loss: 2.1818... Generator Loss: 0.1927
Epoch 1400/1... Discriminator Loss: 1.2394... Generator Loss: 1.1401
Epoch 1450/1... Discriminator Loss: 1.6348... Generator Loss: 0.4029
Epoch 1500/1... Discriminator Loss: 1.4942... Generator Loss: 0.4850
Epoch 1550/1... Discriminator Loss: 1.7047... Generator Loss: 0.4370
Epoch 1600/1... Discriminator Loss: 1.4318... Generator Loss: 0.6408
Epoch 1650/1... Discriminator Loss: 1.3148... Generator Loss: 0.8004
Epoch 1700/1... Discriminator Loss: 1.4798... Generator Loss: 0.5549
Epoch 1750/1... Discriminator Loss: 1.6525... Generator Loss: 1.0588
Epoch 1800/1... Discriminator Loss: 1.4099... Generator Loss: 0.9939
Epoch 1850/1... Discriminator Loss: 1.3998... Generator Loss: 1.1379
Epoch 1900/1... Discriminator Loss: 1.4465... Generator Loss: 0.4683
Epoch 1950/1... Discriminator Loss: 1.2861... Generator Loss: 1.2587
Epoch 2000/1... Discriminator Loss: 1.3773... Generator Loss: 0.7747
Epoch 2050/1... Discriminator Loss: 1.3360... Generator Loss: 0.8786
Epoch 2100/1... Discriminator Loss: 1.5091... Generator Loss: 0.9828
Epoch 2150/1... Discriminator Loss: 1.5179... Generator Loss: 0.7170
Epoch 2200/1... Discriminator Loss: 1.3227... Generator Loss: 0.7542
Epoch 2250/1... Discriminator Loss: 1.5793... Generator Loss: 0.4818
Epoch 2300/1... Discriminator Loss: 1.3575... Generator Loss: 1.4322
Epoch 2350/1... Discriminator Loss: 1.4684... Generator Loss: 0.9527
Epoch 2400/1... Discriminator Loss: 1.2477... Generator Loss: 1.0558
Epoch 2450/1... Discriminator Loss: 1.2622... Generator Loss: 1.0764
Epoch 2500/1... Discriminator Loss: 1.5188... Generator Loss: 0.4828
Epoch 2550/1... Discriminator Loss: 1.4984... Generator Loss: 0.9056
Epoch 2600/1... Discriminator Loss: 1.7343... Generator Loss: 0.3576
Epoch 2650/1... Discriminator Loss: 1.4447... Generator Loss: 0.5481
Epoch 2700/1... Discriminator Loss: 1.5029... Generator Loss: 1.2396
Epoch 2750/1... Discriminator Loss: 1.5243... Generator Loss: 0.7971
Epoch 2800/1... Discriminator Loss: 1.7053... Generator Loss: 0.4279
Epoch 2850/1... Discriminator Loss: 1.3028... Generator Loss: 0.7965
Epoch 2900/1... Discriminator Loss: 1.4476... Generator Loss: 0.6341
Epoch 2950/1... Discriminator Loss: 1.3512... Generator Loss: 0.8439
Epoch 3000/1... Discriminator Loss: 1.3609... Generator Loss: 0.7192
Epoch 3050/1... Discriminator Loss: 1.4273... Generator Loss: 0.7173
Epoch 3100/1... Discriminator Loss: 1.8166... Generator Loss: 0.3056
Epoch 3150/1... Discriminator Loss: 1.5916... Generator Loss: 0.5197

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
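
If you prefer the command line to the notebook menu, the same HTML export can usually be produced with nbconvert (assuming a standard Jupyter installation); running it from a notebook cell looks like this:

!jupyter nbconvert --to html dlnd_face_generation.ipynb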

In [ ]: